Parallel Algorithms for the Training Process of a Neural Network-Based System
Authors
Abstract
This paper addresses the problem of developing efficient parallel algorithms for the training procedure of a neural network–based Fingerprint Image Comparison (FIC) system. The target architecture is assumed to be a coarse-grain distributed-memory parallel architecture. Two types of parallelism, node parallelism and training set parallelism (TSP), are investigated. Theoretical analysis and experimental results show that node parallelism has low speedup and poor scalability, while TSP provides the best speedup performance. TSP, however, suffers from a slow convergence rate. To reduce this effect, a modified training set parallel algorithm using weighted contributions of synaptic connections is proposed. Experimental results show that this algorithm provides a fast convergence rate while retaining the best speedup performance obtained. The combination of TSP with node parallelism is also investigated and achieves good performance, providing better scalability at the cost of a slight decrease in speedup. The above algorithms are implemented on a 32-node CM-5.
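The abstract describes training set parallelism only at a high level. The sketch below illustrates the general idea as a minimal single-machine simulation: each worker accumulates delta-rule weight updates on its own partition of the training set, and the combined update is formed as a weighted sum of the workers' contributions. The single sigmoid layer, the learning rate, and the size-proportional weighting are illustrative assumptions for this sketch; they are not the paper's exact weighted-contribution scheme for synaptic connections or its CM-5 implementation.

# Minimal sketch of training set parallelism (TSP), assuming a one-layer
# sigmoid network trained with the delta rule. Each "worker" processes its
# own partition of the training set; its update is weighted by partition
# size before being applied (an assumed weighting, for illustration only).
import numpy as np

def worker_update(W, X, T, lr=0.1):
    """Accumulate one epoch's weight updates on a single data partition."""
    dW = np.zeros_like(W)
    for x, t in zip(X, T):
        y = 1.0 / (1.0 + np.exp(-W @ x))               # sigmoid activation
        dW += lr * np.outer((t - y) * y * (1.0 - y), x)  # delta rule
    return dW

def tsp_epoch(W, X, T, n_workers=4):
    """One TSP epoch: split the training set, combine weighted worker updates."""
    parts = np.array_split(np.arange(len(X)), n_workers)
    total = np.zeros_like(W)
    for idx in parts:                      # conceptually runs in parallel
        share = len(idx) / len(X)          # assumed weighting by partition size
        total += share * worker_update(W, X[idx], T[idx])
    return W + total

# Toy usage: learn a 2-input, 1-output threshold mapping.
rng = np.random.default_rng(0)
X = rng.random((64, 2))
T = (X.sum(axis=1, keepdims=True) > 1.0).astype(float)
W = rng.standard_normal((1, 2)) * 0.1
for _ in range(200):
    W = tsp_epoch(W, X, T)

Because each worker sees only part of the data, combining updates this way can slow convergence, which is the effect the paper's weighted-contribution modification is reported to mitigate.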
Similar articles
Design, Development and Evaluation of an Orange Sorter Based on Machine Vision and Artificial Neural Network Techniques
ABSTRACT - The high production of orange fruit in Iran calls for quality sorting of this product as a requirement for entering global markets. This study was devoted to the development of an automatic fruit sorter based on size. The hardware consisted of two units. The first was an image acquisition apparatus equipped with a camera, a robotic arm, and controller circuits. The second unit consisted of a robotic...
A Hybrid Optimization Algorithm for Learning Deep Models
Deep learning is a subset of machine learning that is widely used in the Artificial Intelligence (AI) field, for example in natural language processing and machine vision. The learning algorithms require optimization in multiple aspects. Generally, model-based inference needs to solve an optimization problem. In deep learning, the most important problem that can be solved by optimization is neural n...
A Differential Evolution and Spatial Distribution based Local Search for Training Fuzzy Wavelet Neural Network
Many parameter-tuning algorithms have been proposed for training Fuzzy Wavelet Neural Networks (FWNNs). Absence of an appropriate structure, convergence to local optima, and low speed of the learning algorithms are deficiencies of FWNNs in previous studies. In this paper, a Memetic Algorithm (MA) is introduced to train FWNNs and address the aforementioned learning deficiencies. Differential Evolution...
The Optimization of the Effective Parameters of the Die in Parallel Tubular Channel Angular Pressing Process by Using Neural Network and Genetic Algorithm Methods
One of the reasons that researchers in recent years have tried to produce ultrafine-grained materials is the production of lightweight components with high strength and reliability. There are various methods for the production of ultrafine-grained materials, one of which is severe plastic deformation. The severe plastic deformation method comprises different processes, one of which is the parallel tubular chann...
Two Novel Learning Algorithms for CMAC Neural Network Based on Changeable Learning Rate
The Cerebellar Model Articulation Controller (CMAC) neural network is a computational model of the cerebellum that acts as a lookup table. The advantages of CMAC are fast learning convergence and the capability of mapping nonlinear functions, due to its local generalization of weight updating, simple structure, and easy processing. In the training phase, the disadvantage of some CMAC models is an unstable phenomenon...
Journal: IJHPCA
Volume 14, Issue -
Pages: -
Year of publication: 2000